
Krishnan, Ramayya

Topic Weight Topic Terms
0.530 model models process analysis paper management support used environment decision provides based develop use using
0.481 risk risks management associated managing financial appropriate losses expected future literature reduce loss approach alternative
0.463 network networks social analysis ties structure p2p exchange externalities individual impact peer-to-peer structural growth centrality
0.453 recommendations recommender systems preferences recommendation rating ratings preference improve users frame contextual using frames sensemaking
0.448 software vendors vendor saas patch cloud release model vulnerabilities time patching overall quality delivery software-as-a-service
0.382 set approach algorithm optimal used develop results use simulation experiments algorithms demonstrate proposed optimization present
0.323 information stage stages venture policies ewom paper crowdfunding second influence revelation funding cost important investigation
0.274 consumer consumers model optimal welfare price market pricing equilibrium surplus different higher results strategy quality
0.267 methods information systems approach using method requirements used use developed effective develop determining research determine
0.253 security threat information users detection coping configuration avoidance response firm malicious attack intrusion appraisal countermeasures
0.245 knowledge transfer management technology creation organizational process tacit research study organization processes work organizations implications
0.197 process business reengineering processes bpr redesign paper research suggests provide past improvements manage enable organizations
0.189 data used develop multiple approaches collection based research classes aspect single literature profiles means crowd
0.182 electronic markets commerce market new efficiency suppliers internet changes marketplace analysis suggests b2b marketplaces industry
0.181 workflow tools set paper management specification command support formal implemented scenarios associated sequence large derived
0.177 systems information management development presented function article discussed model personnel general organization described presents finally
0.176 information types different type sources analysis develop used behavior specific conditions consider improve using alternative
0.170 technology investments investment information firm firms profitability value performance impact data higher evidence diversification industry
0.168 policy movie demand features region effort second threshold release paid number regions analyze period respect
0.165 performance results study impact research influence effects data higher efficiency effect significantly findings impacts empirical
0.160 systems information objectives organization organizational development variety needs need efforts technical organizations developing suggest given
0.150 database language query databases natural data queries relational processing paper using request views access use
0.149 modeling models model business research paradigm components using representation extension logical set existing way aspects
0.139 information systems paper use design case important used context provide presented authors concepts order number
0.135 interface user users interaction design visual interfaces human-computer navigation human need cues studies guidelines laboratory
0.134 expert systems knowledge knowledge-based human intelligent experts paper problem acquisition base used expertise intelligence domain
0.122 impact data effect set propensity potential unique increase matching use selection score results self-selection heterogeneity
0.118 supply chain information suppliers supplier partners relationships integration use chains technology interorganizational sharing systems procurement
0.117 computing end-user center support euc centers management provided users user services organizations end satisfaction applications
0.115 multiple elements process environments complex integrated interdependencies design different developing integration order approach dialogue framework
0.115 percent sales average economic growth increasing total using number million percentage evidence analyze approximately does
0.110 auctions auction bidding bidders bid combinatorial bids online bidder strategies sequential prices design price using
0.108 validity reliability measure constructs construct study research measures used scale development nomological scales instrument measurement
0.107 model research data results study using theoretical influence findings theory support implications test collected tested
0.105 decision support systems making design models group makers integrated article delivery representation portfolio include selection
0.103 results study research information studies relationship size variables previous variable examining dependent increases empirical variance
0.102 problem problems solution solving problem-solving solutions reasoning heuristic theorizing rules solve general generating complex example
0.102 product products quality used characteristics examines role provide goods customization provides offer core sell key
0.100 market competition competitive network markets firms products competing competitor differentiation advantage competitors presence dominant structure


Padman, Rema 2 Asvanund, Atip 1 Arora, Ashish 1 Argote, Linda 1
Bai, Xue 1 Clay, Karen 1 Callan, Jamie 1 Chen, Pei-yu 1
Duncan, George 1 Greenwald, Amy 1 Ghose, Anindya 1 Kaplan, David 1
Kannan, Karthik 1 Kim, Youngsoo 1 Kataria, Gaurav 1 Li, Xiaoping 1
May, Jerrold H. 1 Peters, James 1 Raghunathan, Srinivasan 1 Steier, David 1
Smith, Michael D. 1 Sahoo, Nachiketa 1 Telang, Rahul 1 Teland, Rahul 1
Wang, Harry Jiannan 1 Yang, Yubao 1 Zhao, J. Leon 1
accounting information systems 1 auctions 1 Bayesian network 1 business process management 1
collaborative filtering 1 computing call center 1 conditional value at risk 1 control 1
correlated failures 1 disclosure policy 1 diversification 1 downtime loss 1
efficiency 1 empirical 1 expectation maximization 1 expected loss 1
electronic markets 1 Global Query Optimization 1 game theory simulation 1 Heterogeneous Database Systems 1
hazard model 1 halo effect 1 Information Integration 1 incentives 1
internal control 1 information revelation 1 information security 1 IT problem type 1
information flow 1 information goods 1 Knowledge based systems 1 knowledge classification 1
knowledge transfer 1 learning curves 1 Model management 1 Maximal Objects 1
mathematical modeling 1 MDP 1 mixture model 1 multicomponent rating 1
market segmentation 1 Napster 1 network design 1 network externalities 1
network effects 1 Object based methods 1 open source vendors 1 Process analysis 1
peer-to-peer networks 1 patch release time 1 quality degradation 1 recommender system 1
risk management 1 Soar-Based AI Systems 1 size limitation 1 security vulnerability 1
software vendors 1 Security 1 software allocation 1 supply chain 1
Universal Relational Model 1 used goods markets 1 workflow and process management 1

Articles (11)

On Risk Management with Information Flows in Business Processes. (Information Systems Research, 2013)
Abstract:
    This article investigates the economic consequences of data errors in the information flows associated with business processes. We develop a process modeling-based methodology for managing the risks associated with such data errors. Our method focuses on the topological structure of a process and takes into account its effect on error propagation and risk mitigation using both expected loss and conditional value-at-risk risk measures. Using this method, optimal strategies can be designed for control resource allocation to manage risk in a business process. Our work contributes to the literature on both ex ante risk management-based business process design and ex post risk assessments of existing business processes and control models. This research applies not only to the literature on and practice of process design and risk management but also to business decision support systems in general. An order-fulfillment process of an online pharmacy is used to illustrate the methodology.
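As an illustration only (the loss figures below are made up, not from the paper), the two risk measures the abstract names, expected loss and conditional value-at-risk, can be computed from a set of loss scenarios:

```python
def expected_loss(losses):
    """Expected loss: the plain mean over loss scenarios."""
    return sum(losses) / len(losses)

def cvar(losses, alpha=0.95):
    """Conditional value-at-risk: the mean of losses in the worst (1 - alpha) tail."""
    ordered = sorted(losses)
    cutoff = int(alpha * len(ordered))  # index of the alpha-quantile (VaR)
    tail = ordered[cutoff:]
    return sum(tail) / len(tail)

# Hypothetical per-scenario losses from data errors in a process
losses = [0, 0, 1, 2, 2, 3, 5, 8, 20, 40]
print(expected_loss(losses))      # 8.1
print(cvar(losses, alpha=0.8))    # mean of the worst 20%: 30.0
```

CVaR is more conservative than expected loss because it focuses on the tail, which is why the two measures can recommend different control-resource allocations.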
The Halo Effect in Multicomponent Ratings and Its Implications for Recommender Systems: The Case of Yahoo! Movies. (Information Systems Research, 2012)
Abstract:
    Collaborative filtering algorithms learn from the ratings of a group of users on a set of items to find personalized recommendations for each user. Traditionally they have been designed to work with one-dimensional ratings. With interest growing in recommendations based on multiple aspects of items, we present an algorithm for using multicomponent rating data. The presented mixture model-based algorithm uses the component rating dependency structure discovered by a structure learning algorithm. The structure is supported by the psychometric literature on the halo effect. This algorithm is compared with a set of model-based and instance-based algorithms for single-component ratings and their variations for multicomponent ratings. We evaluate the algorithms using data from Yahoo! Movies. Use of multiple components leads to significant improvements in recommendations. However, we find that the choice of algorithm depends on the sparsity of the training data. It also depends on whether the task of the algorithm is to accurately predict ratings or to retrieve relevant items. In our experiments a model-based multicomponent rating algorithm is able to better retrieve items when training data are sparse. However, if the training data are not sparse, or if we are trying to predict the rating values accurately, then the instance-based multicomponent rating collaborative filtering algorithms perform better. Beyond generating recommendations we show that the proposed model can fill in missing rating components. Theories in the psychometric literature and the empirical evidence suggest that rating specific aspects of a subject is difficult. Hence, filling in the missing component values opens the possibility of a rater support system that facilitates gathering multicomponent ratings.
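A toy sketch of multicomponent-rating collaborative filtering (the ratings, users, and component names are invented, and this instance-based sketch does not reproduce the paper's mixture-model algorithm): neighbors are compared on all rating components of commonly rated items, and the overall component is predicted by similarity-weighted averaging:

```python
from math import sqrt

# Hypothetical ratings: user -> item -> (story, acting, visuals, overall)
ratings = {
    "u1": {"m1": (5, 4, 4, 5), "m2": (2, 3, 2, 2)},
    "u2": {"m1": (5, 5, 4, 5), "m2": (1, 2, 2, 2), "m3": (4, 4, 5, 4)},
    "u3": {"m1": (2, 1, 3, 2), "m3": (5, 5, 4, 5)},
}

def similarity(a, b):
    """Cosine similarity over all components of commonly rated items."""
    common = set(ratings[a]) & set(ratings[b])
    va = [x for m in common for x in ratings[a][m]]
    vb = [x for m in common for x in ratings[b][m]]
    dot = sum(x * y for x, y in zip(va, vb))
    na = sqrt(sum(x * x for x in va))
    nb = sqrt(sum(x * x for x in vb))
    return dot / (na * nb) if na and nb else 0.0

def predict_overall(user, item):
    """Similarity-weighted mean of neighbors' overall (last) component."""
    nbrs = [(similarity(user, v), ratings[v][item][-1])
            for v in ratings if v != user and item in ratings[v]]
    wsum = sum(abs(s) for s, _ in nbrs)
    return sum(s * r for s, r in nbrs) / wsum if wsum else None

print(round(predict_overall("u1", "m3"), 2))
```

Comparing users on every component rather than on the overall rating alone is the sense in which multicomponent data can sharpen neighbor selection, especially when overall ratings are sparse.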
The Learning Curve of IT Knowledge Workers in a Computing Call Center. (Information Systems Research, 2012)
Abstract:
    We analyze learning and knowledge transfer in a computing call center. The information technology (IT) technical services provided by call centers are characterized by constant changes in relevant knowledge and a wide variety of support requests. In this IT problem-solving context, we analyze the learning curve relationship between problem-solving experience and performance enhancement. Based on data collected from a university computing call center staffed by different types of consultants, our empirical findings indicate that (a) the learning effect, as measured by the reduction in average resolution time, occurs with experience; (b) knowledge transfer within a group occurs among lower-level consultants utilizing application-level knowledge (as opposed to technical-level knowledge); and (c) knowledge transfers across IT problem types. These estimates of learning and knowledge transfer contribute to the development of an empirically grounded understanding of IT knowledge workers' learning behavior. The results also have implications for operational decisions about the staffing and problem-solving strategy of call centers.
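A minimal sketch of the classic power-law learning curve implicit in "resolution time falls with experience" (the data below are synthetic and the functional form is the textbook one, not necessarily the paper's specification), fit by least squares in log-log space:

```python
import math

def fit_learning_curve(experience, resolution_time):
    """Fit T = a * n^(-b) via OLS on log T = log a - b log n."""
    xs = [math.log(n) for n in experience]
    ys = [math.log(t) for t in resolution_time]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - slope * mx)
    return a, -slope  # b > 0 means resolution times fall with experience

# Synthetic data generated exactly from T = 30 * n^-0.2
n = [1, 2, 4, 8, 16, 32]
t = [30 * k ** -0.2 for k in n]
a, b = fit_learning_curve(n, t)
print(round(a, 2), round(b, 2))  # → 30.0 0.2
```

With real call-center data the fit would of course be noisy, and the learning rate b could be estimated separately by consultant type or IT problem type to study transfer.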
CORRELATED FAILURES, DIVERSIFICATION, AND INFORMATION SECURITY RISK MANAGEMENT. (MIS Quarterly, 2011)
Abstract:
    The article presents research on risk management related to computer network security in management information systems. A queuing model is presented to quantify the downtime loss a network faces in a security attack and to compare it to the costs of investing in computer security technologies, diversifying computer software to limit the risk of correlated failures, and investing in information technology to repair failures. Situations under which the strategy of diversifying computer software is financially advantageous are discussed.
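The abstract does not specify the queuing model's form; purely as an illustration, an M/M/1 sketch (with made-up rates) shows how downtime loss could be quantified from a failure arrival rate and a repair rate:

```python
def mm1_metrics(arrival_rate, repair_rate):
    """M/M/1 sketch: failures arrive at `arrival_rate`, repairs complete at
    `repair_rate`. Returns the expected number of unresolved failures
    (L = rho / (1 - rho)) and the mean downtime per failure
    (W = 1 / (mu - lambda))."""
    rho = arrival_rate / repair_rate
    assert rho < 1, "repair capacity must exceed the failure rate"
    avg_in_system = rho / (1 - rho)
    avg_downtime = 1 / (repair_rate - arrival_rate)
    return avg_in_system, avg_downtime

# Hypothetical: 2 failures/day arrive, 5 repairs/day completed
L_, W_ = mm1_metrics(arrival_rate=2.0, repair_rate=5.0)
print(L_, W_)
```

Multiplying the mean downtime by a per-hour loss figure would give the expected downtime loss to weigh against security investment, diversification, and repair-capacity costs.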
On Evaluating Information Revelation Policies in Procurement Auctions: A Markov Decision Process Approach. (Information Systems Research, 2010)
Abstract:
    Each market session in a reverse electronic marketplace features a procurer and many suppliers. An important attribute of a market session chosen by the procurer is its information revelation policy. The revelation policy determines the information (such as the number of competitors, the winning bids, etc.) that will be revealed to participating suppliers at the conclusion of each market session. Suppliers participating in multiple market sessions bid strategically, misrepresenting their own cost structures, to exploit the information revealed at the end of each market session. The information helps to reduce two types of uncertainty encountered in future market sessions, namely, their opponents' cost structure and an estimate of the number of their competitors. Whereas the first type of uncertainty is present in both physical and e-marketplaces, the second type naturally arises in IT-enabled marketplaces. Through their effect on the uncertainty faced by suppliers, information revelation policies influence the bidding behavior of suppliers, which, in turn, determines the expected price paid by the procurer. Therefore, the choice of information revelation policy has important consequences for the procurer. This paper develops a partially observable Markov decision process model of supplier bidding behavior and uses a multiagent e-marketplace simulation to analyze the effect that two commonly used information revelation policies—complete information policy and incomplete information policy—have on the expected price paid by the procurer. We find that the expected price under the complete information policy is lower than that under the incomplete information policy. The integration of ideas from the multiagent, machine-learning, and economics literatures to develop a method for evaluating information revelation policies in e-marketplaces is a novel feature of this paper.
An Empirical Analysis of Software Vendors' Patch Release Behavior: Impact of Vulnerability Disclosure. (Information Systems Research, 2010)
Abstract:
    A key aspect of better and more secure software is timely patch release by software vendors for the vulnerabilities in their products. Software vulnerability disclosure, which refers to the publication of vulnerability information, has generated intense debate. An important consideration in this debate is the behavior of software vendors. How quickly do vendors patch vulnerabilities, and how does disclosure affect patch release time? This paper compiles a unique data set from the Computer Emergency Response Team/Coordination Center (CERT) and SecurityFocus to answer these questions. Our results suggest that disclosure accelerates patch release: the instantaneous probability of releasing a patch rises by nearly two and a half times because of disclosure. Open source vendors release patches more quickly than closed source vendors, and vendors are more responsive to more severe vulnerabilities. We also find that vendors respond more slowly to vulnerabilities not disclosed by CERT. We verify our results using another publicly available data set and find that they are consistent. We also show how our estimates can aid policy makers in their decision making.
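The reported "nearly two and a half times" rise in the instantaneous probability of releasing a patch is a hazard ratio. Under a simple exponential duration model (a deliberate simplification of the paper's hazard model, with invented patch times), the hazard is constant and equals one over the mean time to patch, so the ratio can be read off from group means:

```python
def exp_hazard(times):
    """Maximum-likelihood hazard rate under an exponential duration model:
    number of events divided by total exposure time."""
    return len(times) / sum(times)

# Hypothetical days-to-patch for vulnerabilities, by disclosure status
undisclosed = [120, 200, 90, 150, 240]
disclosed = [50, 80, 40, 60, 90]

hazard_ratio = exp_hazard(disclosed) / exp_hazard(undisclosed)
print(round(hazard_ratio, 2))  # → 2.5
```

A proportional-hazards specification would additionally control for covariates such as severity and open versus closed source, which is what lets the paper attribute the acceleration to disclosure itself.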
On Data Reliability Assessment in Accounting Information Systems. (Information Systems Research, 2005)
Abstract:
    The need to ensure reliability of data in information systems has long been recognized. However, recent accounting scandals and the subsequent requirements enacted in the Sarbanes-Oxley Act have made data reliability assessment of critical importance to organizations, particularly for accounting data. Using the accounting functions of management information systems as a context, this paper develops an interdisciplinary approach to data reliability assessment. Our work builds on the literature in accounting and auditing, where reliability assessment has been a topic of study for a number of years. While formal probabilistic approaches have been developed in this literature, they are rarely used in practice. The research reported in this paper attempts to strike a balance between the informal, heuristic-based approaches used by auditors and formal, probabilistic reliability assessment methods. We develop a formal, process-oriented ontology of an accounting information system that defines its components and semantic constraints. We use the ontology to specify data reliability assessment requirements and develop mathematical-model-based decision support methods to implement these requirements. We provide preliminary empirical evidence that the use of our approach improves the efficiency and effectiveness of reliability assessments. Finally, given the recent trend toward specifying information systems using executable business process models (e.g., business process execution language), we discuss opportunities for integrating our process-oriented data reliability assessment approach--developed in the accounting context--in other IS application contexts.
Effect of Electronic Secondary Markets on the Supply Chain. (Journal of Management Information Systems, 2005)
Abstract:
    We present a model to investigate the competitive implications of electronic secondary markets that promote concurrent selling of new and used goods on a supply chain. In secondary markets where suppliers cannot directly utilize used goods for practicing intertemporal price discrimination and where transaction costs of resales are negligible, the threat of cannibalization of new goods by used goods becomes significant. We examine conditions under which it is optimal for suppliers to operate in such markets, explaining why these markets may not always be detrimental for them. Intuitively, secondary markets provide an active outlet for some high-valuation consumers to sell their used goods. The potential for such resales leads to an increase in consumers' valuation for a new good, leading them to buy an additional new good. Given sufficient heterogeneity in consumers' affinity across multiple suppliers' products, the "market expansion effect" accruing from consumers' cross-product purchase affinity can mitigate the losses incurred by suppliers from the direct "cannibalization effect." We also highlight the strategic role that the used goods commission set by the retailer plays in determining profits for suppliers. We conclude the paper by empirically testing some implications of our model using a unique data set from the online book industry, which has a flourishing secondary market.
An Empirical Analysis of Network Externalities in Peer-to-Peer Music-Sharing Networks. (Information Systems Research, 2004)
Abstract:
    Peer-to-peer (P2P) file sharing networks are an important medium for the distribution of information goods. However, there is little empirical research into the optimal design of these networks under real-world conditions. Early speculation about the behavior of P2P networks has focused on the role that positive network externalities play in improving performance as the network grows. However, negative network externalities also arise in P2P networks because of the consumption of scarce network resources or an increased propensity of users to free ride in larger networks, and the impact of these negative network externalities--while potentially important--has received far less attention. Our research addresses this gap in understanding by measuring the impact of both positive and negative network externalities on the optimal size of P2P networks. Our research uses a unique dataset collected from the six most popular OpenNap P2P networks between December 19, 2000, and April 22, 2001. We find that users contribute additional value to the network at a decreasing rate and impose costs on the network at an increasing rate as the network grows in size. Our results also suggest that users are less likely to contribute resources to the network as the network size increases. Together, these results suggest that the optimal size of these centralized P2P networks is bounded--at some point the costs that a marginal user imposes on the network will exceed the value they provide to the network. This finding is in contrast to early predictions that larger P2P networks would always provide more value to users than smaller networks. Finally, these results also highlight the importance of considering user incentives--an important determinant of resource sharing in P2P networks--in network design.
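The bounded-optimal-size argument can be sketched numerically: with any marginal value that decreases in network size and any marginal cost that increases in it (the functional forms below are purely illustrative, not estimated from the paper's data), the optimal network size is where the two curves cross:

```python
import math

def marginal_value(n):
    """Value a marginal user adds; decreasing in network size (assumed form)."""
    return 10 / math.sqrt(n)

def marginal_cost(n):
    """Cost a marginal user imposes; increasing in network size (assumed form)."""
    return 0.01 * n

def optimal_size(max_n=10**6):
    """Largest n at which the marginal user still adds at least as much
    value as cost, i.e., where the curves cross."""
    n = 1
    while n < max_n and marginal_value(n + 1) >= marginal_cost(n + 1):
        n += 1
    return n

print(optimal_size())  # → 100
```

Past the crossing point, each additional user destroys net value, which is the empirical content of "the optimal size of these centralized P2P networks is bounded."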
On Heterogeneous Database Retrieval: A Cognitively Guided Approach. (Information Systems Research, 2001)
Abstract:
    Retrieving information from heterogeneous database systems involves a complex process and remains a challenging research area. We propose a cognitively guided approach for developing an information-retrieval agent that takes the user's information request, identifies relevant information sources, and generates a multidatabase access plan. Our work is distinctive in that the agent design is based on an empirical study of how human experts retrieve information from multiple, heterogeneous database systems. To improve on empirically observed information-retrieval capabilities, the design incorporates mathematical models and algorithmic components. These components optimize the set of information sources that need to be considered to respond to a user query and are used to develop efficient multidatabase-access plans. This agent design, which integrates cognitive and mathematical models, has been implemented using Soar, a knowledge-based architecture.
MODFORM: A Knowledge-based Tool to Support the Modeling Process. (Information Systems Research, 1993)
Abstract:
    The value of mathematical modeling and analysis in the decision support context is well recognized. However, the complex and evolutionary nature of the modeling process has limited its widespread use. In this paper, we describe our work on knowledge-based tools that support the formulation and revision of mathematical programming models. In contrast to previous work on this topic, we base our work on an in-depth empirical investigation of experienced modelers and present three results: (a) a model of the modeling process of experienced modelers, derived using concurrent verbal protocol analysis; our analysis indicates that modeling is a synthetic process that relates specific features found in the problem to its mathematical model, and these relationships, which are seldom articulated by modelers, are also used to revise models; (b) an implementation of a modeling support system called MODFORM based on this observationally derived model; and (c) the results of a preliminary experiment indicating that users of MODFORM build models comparable to those formulated by experts. We use the formulation of mathematical programming models of production planning problems illustratively throughout the paper.